Optimal nonparametric inference via deep neural network
Authors
Abstract
Deep neural network is a state-of-the-art method in modern science and technology. Much statistical literature has been devoted to understanding its performance in nonparametric estimation, whereas the results are suboptimal due to a redundant logarithmic sacrifice. In this paper, we show that such log-factors are not necessary. We derive upper bounds for the L_2 minimax risk in nonparametric estimation. Sufficient conditions on network architectures are provided so that the upper bounds become optimal (without log-sacrifice). Our proof relies on an explicitly constructed network estimator based on tensor product B-splines. We also derive asymptotic distributions for the constructed network and a related hypothesis testing procedure. The testing procedure is further proved to be minimax optimal under suitable network architectures.
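To make the B-spline ingredient concrete, here is a minimal Python sketch of nonparametric regression with a tensor-product B-spline basis, the building block the proof relies on. This is an illustration only, not the authors' construction: the bivariate design on [0, 1]^2, the knot counts, the spline degree, and the plain least-squares fit are all illustrative assumptions.

```python
# Minimal sketch: tensor-product B-spline regression on [0, 1]^2.
# Basis sizes, degree, and the test function are illustrative choices.
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_basis=8, degree=3):
    """Evaluate a clamped univariate B-spline basis on [0, 1] at points x."""
    n_inner = n_basis - degree - 1
    inner = np.linspace(0, 1, n_inner + 2)[1:-1]
    knots = np.concatenate([[0.0] * (degree + 1), inner, [1.0] * (degree + 1)])
    cols = []
    for j in range(n_basis):
        b = BSpline.basis_element(knots[j:j + degree + 2], extrapolate=False)
        cols.append(np.nan_to_num(b(x)))  # zero outside the local support
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = rng.uniform(size=(500, 2))                        # design points
y = np.sin(2 * np.pi * x[:, 0]) * x[:, 1] + 0.1 * rng.standard_normal(500)

# Tensor-product design: row-wise Kronecker of the two univariate bases.
B1, B2 = bspline_design(x[:, 0]), bspline_design(x[:, 1])
B = np.einsum("ij,ik->ijk", B1, B2).reshape(len(x), -1)

coef, *_ = np.linalg.lstsq(B, y, rcond=None)          # least-squares fit
fitted = B @ coef
```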
Similar resources
Relation Classification via Convolutional Deep Neural Network
The state-of-the-art methods used for relation classification are primarily based on statistical machine learning, and their performance strongly depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing natural language processing (NLP) systems, which leads to the propagation of the errors in the existing tools and hinders the pe...
Deep Network Regularization via Bayesian Inference of Synaptic Connectivity
Deep neural networks (DNNs) often require good regularizers to generalize well. Currently, state-of-the-art DNN regularization techniques consist in randomly dropping units and/or connections on each iteration of the training algorithm. Dropout and DropConnect are characteristic examples of such regularizers, that are widely popular among practitioners. However, a drawback of such approaches co...
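As a minimal sketch of the unit-dropping regularization this blurb describes, the snippet below implements inverted dropout in plain NumPy: on each training pass, units are zeroed independently with probability p, and the surviving units are rescaled so the layer's expected output is unchanged. All names here are illustrative, not from the cited paper.

```python
# Minimal sketch of (inverted) dropout regularization.
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations               # identity at inference time
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)  # rescale to preserve the mean

h = rng.standard_normal((4, 6))          # a batch of hidden activations
h_train = dropout(h, p=0.5)              # roughly half the units zeroed
h_test = dropout(h, training=False)      # unchanged at test time
```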
Bayesian Nonparametric and Parametric Inference
This paper reviews Bayesian Nonparametric methods and discusses how parametric predictive densities can be constructed using nonparametric ideas.
Scaling Nonparametric Bayesian Inference via Subsample-Annealing
We describe an adaptation of the simulated annealing algorithm to nonparametric clustering and related probabilistic models. This new algorithm learns nonparametric latent structure over a growing and constantly churning subsample of training data, where the portion of data subsampled can be interpreted as the inverse temperature β(t) in an annealing schedule. Gibbs sampling at high temperature...
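A toy sketch of the subsample-annealing idea described above: the fraction of data visible to the sampler grows with an inverse-temperature schedule β(t), so early iterations mix over a small, churning subsample and later ones see nearly all the data. The linear schedule and the resampling rule are assumptions for illustration, not the cited algorithm.

```python
# Toy subsample-annealing schedule: the subsample grows as beta(t) rises.
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(1000)                   # stand-in for training examples
n_steps = 50

for t in range(n_steps):
    beta = (t + 1) / n_steps             # inverse temperature in (0, 1]
    k = max(1, int(beta * len(data)))    # subsample size grows with beta
    subsample = rng.choice(data, size=k, replace=False)  # churn each step
    # ... run Gibbs sweeps over `subsample` here ...
```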
Optimal deep neural networks for sparse recovery via Laplace techniques
This paper introduces Laplace techniques to design an optimal neural network for estimation of simplex-valued sparse vectors from compressed measurements. To this end, we recast the problem of MMSE estimation (w.r.t. a prescribed uniform input distribution) as centroid computation of some polytope that is an intersection of the simplex and an affine subspace determined by the measurements. Due ...
Journal
Journal title: Journal of Mathematical Analysis and Applications
Year: 2022
ISSN: 0022-247X, 1096-0813
DOI: https://doi.org/10.1016/j.jmaa.2021.125561